Cooperative AI


Cooperative Resilience in Artificial Intelligence Multiagent Systems

Chacon-Chamorro, Manuela, Giraldo, Luis Felipe, Quijano, Nicanor, Vargas-Panesso, Vicente, González, César, Pinzón, Juan Sebastián, Manrique, Rubén, Ríos, Manuel, Fonseca, Yesid, Gómez-Barrera, Daniel, Perdomo-Pérez, Mónica

arXiv.org Artificial Intelligence

Resilience refers to the ability of systems to withstand, adapt to, and recover from disruptive events. While studies on resilience have attracted significant attention across various research domains, the precise definition of this concept within the field of cooperative artificial intelligence remains unclear. This paper addresses this gap by proposing a clear definition of "cooperative resilience" and outlining a methodology for its quantitative measurement. The methodology is validated in an environment with RL-based and LLM-augmented autonomous agents, subjected to environmental changes and to the introduction of agents with unsustainable behaviors. These events are parameterized to create various scenarios for measuring cooperative resilience. The results highlight the crucial role of resilience metrics in analyzing how the collective system prepares for, resists, and recovers from disruptions, sustains well-being, and transforms in their aftermath. These findings provide foundational insights into the definition, measurement, and preliminary analysis of cooperative resilience, offering significant implications for the broader field of AI. Moreover, the methodology and metrics developed here can be adapted to a wide range of AI applications, enhancing the reliability and effectiveness of AI in dynamic and unpredictable environments.
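
One common way to quantify this kind of resilience (a minimal sketch under our own assumptions, not necessarily the paper's exact metric) is to compare a collective performance signal against its pre-disruption baseline, for example as normalized area under the post-disruption curve. The function `resilience_score` and the toy well-being signal below are illustrative.

```python
import numpy as np

def resilience_score(performance, t_disrupt, baseline=None):
    """Area-under-curve resilience: realized collective performance
    after a disruption, relative to the pre-disruption baseline.

    performance : 1-D array of a collective well-being signal per step
    t_disrupt   : step at which the disruptive event begins
    baseline    : expected undisrupted level; defaults to the
                  pre-disruption mean
    """
    performance = np.asarray(performance, dtype=float)
    if baseline is None:
        baseline = performance[:t_disrupt].mean()
    post = performance[t_disrupt:]
    # Ratio of realized to expected performance after the disruption;
    # 1.0 means full resistance/recovery, lower means degradation.
    return post.sum() / (baseline * len(post))

# Example: a dip at step 50 followed by a linear recovery.
signal = np.concatenate([np.full(50, 1.0),
                         np.linspace(0.4, 1.0, 50)])
print(resilience_score(signal, t_disrupt=50))  # 0.7
```

A single scalar like this captures resistance and recovery together; separating the phases the abstract lists (preparation, resistance, recovery, sustained well-being, transformation) would call for one metric per phase.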


Joint Behavior and Common Belief

Friedenberg, Meir, Halpern, Joseph Y.

arXiv.org Artificial Intelligence

The past few years have seen an uptick in interest in studying cooperative AI, that is, AI systems that are designed to be effective at cooperating. Indeed, a number of influential researchers recently argued that "[w]e need to build a science of cooperative AI... progress towards socially valuable AI will be stunted unless we put the problem of cooperation at the centre of our research" [6]. One type of cooperative behavior is joint behavior, that is, collaboration scenarios where the success of the joint action depends on all agents doing their parts; a single agent deviating can render the efforts of the others ineffective. The notion of joint behavior has been studied in considerable detail under various names, such as "acting together", "teamwork", "collaborative plans", and "shared plans", and highly influential models of it have been developed (see, e.g., [2, 4, 10, 11, 15, 24]). Efforts have also been made to engineer some of these theories into real-world joint planning systems [20, 23].
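
The fragility of joint behavior described above can be made concrete with a toy model (purely illustrative, not drawn from the paper): if a joint task succeeds only when every agent does its part, the probability of success decays exponentially with team size.

```python
import random

def joint_action_payoff(actions):
    """AND-style joint task: the team succeeds only if every agent
    does its part; a single deviation wastes everyone's effort."""
    return 1.0 if all(actions) else 0.0

def expected_payoff(n_agents, p_cooperate, trials=100_000):
    """Monte Carlo estimate when each agent independently does its
    part with probability p_cooperate."""
    wins = sum(
        joint_action_payoff([random.random() < p_cooperate
                             for _ in range(n_agents)])
        for _ in range(trials)
    )
    return wins / trials

# Success probability decays exponentially in team size:
for n in (2, 5, 10):
    print(n, round(expected_payoff(n, 0.9), 3))  # ~0.81, ~0.59, ~0.35
```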


Is diversity the key to collaboration? New AI research suggests so

#artificialintelligence

As artificial intelligence gets better at performing tasks once solely in the hands of humans, like driving cars, many see teaming intelligence as a next frontier. In this future, humans and AI are true partners in high-stakes jobs, such as performing complex surgery or defending against missiles. But before teaming intelligence can take off, researchers must overcome a problem that corrodes cooperation: humans often do not like or trust their AI partners. MIT Lincoln Laboratory researchers have found that training an AI model with mathematically "diverse" teammates improves its ability to collaborate with other AI it has never worked with before, in the card game Hanabi. Moreover, both Facebook and Google's DeepMind concurrently published independent work that also infused diversity into training to improve outcomes in human-AI collaborative games.
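
The article does not give training details, but one generic recipe for producing mathematically "diverse" partners (a sketch of the general idea, not any of these labs' specific algorithms) is to train a population of policies with a pairwise-divergence bonus and then train the focal agent against that population. The names `pairwise_diversity` and `population_objective` and the weight `beta` are our own.

```python
import numpy as np

def pairwise_diversity(policies):
    """Mean pairwise KL divergence between the partners' action
    distributions; a generic stand-in for 'mathematical diversity'."""
    total, pairs = 0.0, 0
    for i, p in enumerate(policies):
        for q in policies[i + 1:]:
            total += np.sum(p * np.log(p / q))
            pairs += 1
    return total / max(pairs, 1)

def population_objective(policies, returns, beta=0.1):
    """Each partner maximizes its own return plus a shared bonus for
    being distinguishable from the rest of the population."""
    return np.mean(returns) + beta * pairwise_diversity(policies)

# Two toy action distributions over 3 actions:
pop = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.2, 0.7])]
print(population_objective(pop, returns=[1.0, 0.9]))
```

The design intuition matches the article: a focal agent trained against a deliberately varied population cannot overfit to one partner's conventions, so it copes better with teammates it has never seen.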



Adversarial Attacks in Cooperative AI

Fujimoto, Ted, Pedersen, Arthur Paul

arXiv.org Artificial Intelligence

Single-agent reinforcement learning algorithms in a multi-agent environment are inadequate for fostering cooperation. If intelligent agents are to interact and work together to solve complex problems, methods that counter non-cooperative behavior are needed to facilitate the training of multiple agents. This is the goal of cooperative AI. Recent work in adversarial machine learning, however, shows that models (e.g., image classifiers) can be easily deceived into making incorrect decisions. In addition, some past research in cooperative AI has relied on new notions of representation, like public beliefs, to accelerate the learning of optimally cooperative behavior. Hence, cooperative AI might introduce new weaknesses not investigated in previous machine learning research. In this paper, our contributions include: (1) an argument that three algorithms inspired by human-like social intelligence introduce new vulnerabilities, unique to cooperative AI, that adversaries can exploit, and (2) an experiment showing that simple adversarial perturbations of the agents' beliefs can negatively impact performance. This evidence points to the possibility that formal representations of social behavior are vulnerable to adversarial attacks.
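
As a rough illustration of what an attack on beliefs might look like (our own sketch with a toy linear value function, not the paper's setup), an FGSM-style step can be taken against the gradient of the team's value estimate, with the perturbed belief re-normalized so it remains a probability distribution. The function `fgsm_on_belief` and the weights `w` are hypothetical.

```python
import numpy as np

def fgsm_on_belief(belief, value_grad, epsilon=0.05):
    """FGSM-style attack on a public-belief vector: step against the
    gradient of the team's value estimate, then re-normalize so the
    perturbed belief is still a distribution."""
    perturbed = belief - epsilon * np.sign(value_grad)
    perturbed = np.clip(perturbed, 1e-8, None)  # keep probabilities valid
    return perturbed / perturbed.sum()

# Toy linear value function V(b) = w . b, so grad_b V = w.
# Mixed-sign weights keep the attack effective after re-normalization.
w = np.array([0.9, 0.1, -0.5])          # hypothetical value weights
belief = np.array([0.6, 0.3, 0.1])      # agent's belief over 3 states
adv = fgsm_on_belief(belief, value_grad=w)
print(w @ belief, w @ adv)               # value drops: 0.52 -> ~0.47
```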


Cooperative AI: machines must learn to find common ground

#artificialintelligence

[Image: A huddle at the 2017 United Nations Climate Change Conference, where attendees cooperated on mutually beneficial joint actions on climate. Credit: Sean Gallup/Getty]

Artificial-intelligence assistants and recommendation algorithms interact with billions of people every day, influencing lives in myriad ways, yet they still have little understanding of humans. Self-driving vehicles controlled by artificial intelligence (AI) are gaining mastery of their interactions with the natural world, but they are still novices when it comes to coordinating with other cars and pedestrians or collaborating with their human operators. The state of AI applications reflects that of the research field. It has long been steeped in a kind of methodological individualism. As is evident from introductory textbooks, the canonical AI problem is that of a solitary machine confronting a non-social environment. Historically, this was a sensible starting point.